Artificial intelligence model detects asymptomatic Covid-19 infections through cellphone-recorded coughs

Results might provide a convenient screening tool for people who may not suspect they are infected.
[Image: coughing graphic. Credit: Christine Daniloff, MIT. Caption: Differences in the coughs of people who are asymptomatic for Covid-19 are not decipherable to the human ear, but can be picked up by artificial intelligence.]
Asymptomatic people who are infected with Covid-19 exhibit, by definition, no discernible physical symptoms of the disease. They are thus less likely to seek out testing for the virus, and could unknowingly spread the infection to others.

But it seems those who are asymptomatic may not be entirely free of changes wrought by the virus. MIT researchers have now found that people who are asymptomatic may differ from healthy individuals in the way that they cough. These differences are not decipherable to the human ear. But it turns out that they can be picked up by artificial intelligence.

In a paper published recently in the IEEE Open Journal of Engineering in Medicine and Biology, the team reports on an AI model that distinguishes asymptomatic people from healthy individuals through forced-cough recordings, which people voluntarily submitted through web browsers and devices such as cellphones and laptops.

The researchers trained the model on tens of thousands of samples of coughs, as well as spoken words. When they fed the model new cough recordings, it accurately identified 98.5 percent of coughs from people who were confirmed to have Covid-19, including 100 percent of coughs from asymptomatics — who reported they did not have symptoms but had tested positive for the virus.

The team is working on incorporating the model into a user-friendly app, which, if FDA-approved and adopted on a large scale, could potentially be a free, convenient, noninvasive prescreening tool to identify people who are likely to be asymptomatic for Covid-19. A user could log in daily, cough into their phone, and instantly get information on whether they might be infected and therefore should confirm with a formal test.

“The effective implementation of this group diagnostic tool could diminish the spread of the pandemic if everyone uses it before going to a classroom, a factory, or a restaurant,” says co-author Brian Subirana, a research scientist in MIT’s Auto-ID Laboratory.

Subirana’s co-authors are Jordi Laguarta and Ferran Hueto, of MIT’s Auto-ID Laboratory.

Vocal sentiments

Prior to the pandemic’s onset, research groups had already been training algorithms on cellphone recordings of coughs to accurately diagnose conditions such as pneumonia and asthma. In similar fashion, the MIT team was developing AI models to analyze forced-cough recordings to see if they could detect signs of Alzheimer’s, a disease associated with not only memory decline but also neuromuscular degradation such as weakened vocal cords.

They first trained a general-purpose convolutional neural network, known as ResNet50, to discriminate sounds associated with different degrees of vocal cord strength. Studies have shown that the quality of the sound “mmmm” can be an indication of how weak or strong a person’s vocal cords are. Subirana trained the neural network on an audiobook dataset with more than 1,000 hours of speech, to pick out the word “them” from other words like “the” and “then.”
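
In code, that kind of fine-tuning might look like the following minimal sketch, which converts short word clips into mel-spectrograms and retrains an ImageNet ResNet50 to separate “them” from other words as a proxy for vocal cord strength. The dataset layout, preprocessing parameters, and PyTorch stack are illustrative assumptions, not the authors’ published pipeline.

```python
# Minimal sketch (assumed stack: PyTorch, torchaudio, torchvision).
# Hypothetical data layout: word_clips/them/*.wav vs. word_clips/other/*.wav.
import torch
import torch.nn as nn
import torchaudio
from torchvision.models import resnet50

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=128)
to_db = torchaudio.transforms.AmplitudeToDB()

def clip_to_input(path: str) -> torch.Tensor:
    """Load a word clip and turn it into a 3-channel 'image' for ResNet50."""
    wave, sr = torchaudio.load(path)
    wave = torchaudio.functional.resample(wave, sr, 16000)
    spec = to_db(mel(wave))        # shape: (1, n_mels, time)
    return spec.repeat(3, 1, 1)    # duplicate to fake RGB channels

model = resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)   # "them" vs. everything else

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(batch: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of spectrogram 'images'."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```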

The team trained a second neural network to distinguish emotional states evident in speech, because Alzheimer’s patients — and people with neurological decline more generally — have been shown to display certain sentiments such as frustration, or having a flat affect, more frequently than they express happiness or calm. The researchers developed a sentiment speech classifier model by training it on a large dataset of actors intonating emotional states, such as neutral, calm, happy, and sad.

The researchers then trained a third neural network on a database of coughs in order to discern changes in lung and respiratory performance.
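
Since the sentiment and cough branches follow the same pattern as the vocal cord branch, one way to picture the setup is three copies of the same fine-tuning recipe with different output classes. The class counts and labels below are illustrative assumptions, not the paper’s exact configuration.

```python
# Sketch: three biomarker branches sharing one recipe, differing only in labels.
import torch.nn as nn
from torchvision.models import resnet50

def make_branch(num_classes: int) -> nn.Module:
    branch = resnet50(weights="IMAGENET1K_V1")
    branch.fc = nn.Linear(branch.fc.in_features, num_classes)
    return branch

vocal_branch = make_branch(2)      # "them" vs. other words (vocal cord strength)
sentiment_branch = make_branch(4)  # e.g., neutral / calm / happy / sad
cough_branch = make_branch(2)      # healthy vs. impaired respiratory sound
```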

Finally, the team combined all three models, and overlaid an algorithm to detect muscular degradation. The algorithm does so by essentially simulating an audio mask, or layer of noise, and distinguishing strong coughs — those that can be heard over the noise — from weaker ones.
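
A rough sketch of that combination step, under the assumption that each branch exposes a penultimate-layer embedding, might pair a fused classifier with a noise-mask feature that checks whether a cough stays audible above injected noise. The frame sizes, noise level, and fusion head here are guesses for illustration, not the published architecture.

```python
# Sketch of the fusion step: three branch embeddings plus a noise-mask feature
# feed one classifier. All sizes and the noise level are assumptions.
import torch
import torch.nn as nn

def mask_feature(wave: torch.Tensor, noise_db: float = -20.0) -> torch.Tensor:
    """Crude muscular-degradation proxy: the fraction of 25 ms frames whose
    energy stays above a synthetic noise floor (a cough 'heard over the noise')."""
    noise = torch.randn_like(wave) * (10 ** (noise_db / 20))
    frame_energy = wave.unfold(-1, 400, 200).pow(2).mean(-1)  # 400 samples at 16 kHz
    noise_energy = noise.unfold(-1, 400, 200).pow(2).mean(-1)
    return (frame_energy > noise_energy).float().mean(-1, keepdim=True)

class FusionHead(nn.Module):
    """Concatenates penultimate-layer embeddings from the three ResNet50
    branches (2048-dim each) with the scalar mask feature."""
    def __init__(self, embed_dim: int = 2048):
        super().__init__()
        self.classify = nn.Linear(3 * embed_dim + 1, 2)

    def forward(self, vocal, sentiment, cough, mask):
        return self.classify(torch.cat([vocal, sentiment, cough, mask], dim=-1))
```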

With their new AI framework, the team fed in audio recordings, including recordings of Alzheimer’s patients, and found it could identify the Alzheimer’s samples better than existing models. The results showed that, together, vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation were effective biomarkers for diagnosing the disease.

When the coronavirus pandemic began to unfold, Subirana wondered whether their AI framework for Alzheimer’s might also work for diagnosing Covid-19, as there was growing evidence that infected patients experienced some similar neurological symptoms such as temporary neuromuscular impairment.

“The sounds of talking and coughing are both influenced by the vocal cords and surrounding organs. This means that when you talk, part of your talking is like coughing, and vice versa. It also means that things we easily derive from fluent speech, AI can pick up simply from coughs, including things like the person’s gender, mother tongue, or even emotional state. There’s in fact sentiment embedded in how you cough,” Subirana says. “So we thought, why don’t we try these Alzheimer’s biomarkers [to see if they’re relevant] for Covid.”

“A striking similarity”

In April, the team set out to collect as many recordings of coughs as they could, including those from Covid-19 patients. They established a website where people can record a series of coughs through a cellphone or other web-enabled device. Participants also fill out a survey listing any symptoms they are experiencing, whether or not they have Covid-19, and whether they were diagnosed through an official test, by a doctor’s assessment of their symptoms, or through self-diagnosis. They also can note their gender, geographical location, and native language.

To date, the researchers have collected more than 70,000 recordings, each containing several coughs, amounting to some 200,000 forced-cough audio samples, which Subirana says is “the largest research cough dataset that we know of.” Around 2,500 recordings were submitted by people who were confirmed to have Covid-19, including those who were asymptomatic.

The team used the 2,500 Covid-associated recordings, along with 2,500 more recordings that they randomly selected from the collection to balance the dataset. They used 4,000 of these samples to train the AI model. The remaining 1,000 recordings were then fed into the model to see if it could accurately discern coughs from Covid patients versus healthy individuals.
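
In code, the balancing and split described above might look something like this minimal sketch. The file paths, random seed, and helper function are hypothetical; only the class counts and the 4,000/1,000 split come from the article.

```python
# Sketch: balance 2,500 positives with 2,500 sampled negatives, then split.
import random

random.seed(0)
covid_recordings = [f"recordings/covid/{i}.wav" for i in range(2500)]   # hypothetical paths
other_recordings = [f"recordings/other/{i}.wav" for i in range(67500)]  # hypothetical paths

negatives = random.sample(other_recordings, 2500)  # balance the classes
dataset = [(p, 1) for p in covid_recordings] + [(p, 0) for p in negatives]
random.shuffle(dataset)
train_set, test_set = dataset[:4000], dataset[4000:]  # 4,000 train / 1,000 test

def sensitivity(predictions, labels):
    """True-positive rate: the basis of the reported 98.5 percent figure."""
    true_positives = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    return true_positives / sum(labels)
```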

Surprisingly, as the researchers write in their paper, their efforts have revealed “a striking similarity between Alzheimer’s and Covid discrimination.”

Without much tweaking to the AI framework originally meant for Alzheimer’s, they found it was able to pick up patterns in the four biomarkers — vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation — that are specific to Covid-19. The model identified 98.5 percent of coughs from people confirmed to have Covid-19, and of those, it accurately detected all of the asymptomatic coughs.

“We think this shows that the way you produce sound changes when you have Covid, even if you’re asymptomatic,” Subirana says.

Asymptomatic symptoms

The AI model, Subirana stresses, is not meant to diagnose symptomatic people, in the sense of determining whether their symptoms are due to Covid-19 or to other conditions like flu or asthma. The tool’s strength lies in its ability to discern asymptomatic coughs from healthy coughs.

The team is working with a company to develop a free pre-screening app based on their AI model. They are also partnering with several hospitals around the world to collect a larger, more diverse set of cough recordings, which will help to further train the model and strengthen its accuracy.

As they propose in their paper, “Pandemics could be a thing of the past if pre-screening tools are always on in the background and constantly improved.”

Ultimately, they envision that audio AI models like the one they’ve developed may be incorporated into smart speakers and other listening devices so that people can conveniently get an initial assessment of their disease risk, perhaps on a daily basis.

Press Mentions

Economist

Research scientist Brian Subirana speaks with The Economist’s Babbage podcast about his work developing a new AI system that could be used to help diagnose people with asymptomatic Covid-19.

Mashable

Mashable reporter Rachel Kraus writes that a new system developed by MIT researchers could be used to help identify patients with Covid-19. Kraus writes that the algorithm can “differentiate the forced coughs of asymptomatic people who have Covid from those of healthy people.”

BBC News

A new algorithm developed by MIT researchers could be used to help detect people with Covid-19 by listening to the sound of their coughs, reports Zoe Kleinman for BBC News. “In tests, it achieved a 98.5% success rate among people who had received an official positive coronavirus test result, rising to 100% in those who had no other symptoms,” writes Kleinman.

Gizmodo

A new tool developed by MIT researchers uses neural networks to help identify Covid-19, reports Alyse Stanley for Gizmodo. The model “can detect the subtle changes in a person’s cough that indicate whether they’re infected, even if they don’t have any other symptoms,” Stanley explains.

CBS Boston

MIT researchers have developed a new AI model that could help identify people with asymptomatic Covid-19 based on the sound of their cough, reports CBS Boston. The researchers hope that in the future the model could be used to help create an app that serves as a “noninvasive prescreening tool to figure out who is likely to have the coronavirus.”

TechCrunch

TechCrunch reporter Devin Coldewey writes that MIT researchers have built a new AI model that can help detect Covid-19 by listening to the sound of a person’s cough. “The tool is detecting features that allow it to discriminate the subjects that have COVID from the ones that don’t,” explains Brian Subirana, a research scientist in MIT’s Auto-ID Laboratory.
